On functions with zero mean over a finite group

Authors

Abstract


Related articles

Zero-cycles on varieties over finite fields

For any field k, Milnor [Mi] defined a sequence of groups K_0(k), K_1^M(k), K_2^M(k), ..., which later came to be known as Milnor K-groups. These were studied extensively by Bass and Tate [BT], Suslin [Su], Kato [Ka1], [Ka2] and others. In [Som], Somekawa investigates a generalization of this definition proposed by Kato: given semi-abelian varieties G_1, ..., G_s over a field k, there is a...

Full text

Bent functions on a finite nonabelian group

We introduce the notion of a bent function on a finite nonabelian group, which is a natural generalization of the well-known notion of bentness on a finite abelian group due to Logachev, Salnikov and Yashchenko. Using the theory of linear representations and noncommutative harmonic analysis of finite groups, we obtain several properties of such functions similar to the corresponding properties of...

Full text
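As background for the abstract above, the classical Boolean special case of bentness is often stated via the Walsh transform; the sketch below recalls that special case only (the notation W_f, f: F_2^n -> F_2 and the normalization 2^{n/2} are the standard conventions, not taken from this paper, and the abelian-group notion of Logachev, Salnikov and Yashchenko generalizes it via characters).

% Classical Boolean special case of bentness (background sketch;
% conventions for general abelian groups may be normalized differently).
\[
  W_f(a) \;=\; \sum_{x \in \mathbb{F}_2^n} (-1)^{f(x) + \langle a, x\rangle},
  \qquad
  f \ \text{is bent} \iff |W_f(a)| = 2^{n/2} \ \text{for all } a \in \mathbb{F}_2^n .
\]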

On the Maximal Cross Number of Unique Factorization Zero-sum Sequences over a Finite Abelian Group

Let S = (g_1, ..., g_l) be a sequence of elements from an additive finite abelian group G, and let

Full text
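Since the abstract above is truncated, it may help to recall the standard definitions it builds on; the following are the conventional notions from zero-sum theory (the symbols sigma(S), k(S) and ord(g_i) are the usual notation, not quoted from the paper).

% Standard definitions from zero-sum theory (background, not quoted from the paper).
\[
  \sigma(S) \;=\; \sum_{i=1}^{l} g_i \;=\; 0 \quad\text{(zero-sum sequence)},
  \qquad
  \mathsf{k}(S) \;=\; \sum_{i=1}^{l} \frac{1}{\operatorname{ord}(g_i)} \quad\text{(cross number)}.
\]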

Shifting Mean Activation Towards Zero with Bipolar Activation Functions

We propose a simple extension to the ReLU-family of activation functions that allows them to shift the mean activation across a layer towards zero. Combined with proper weight initialization, this alleviates the need for normalization layers. We explore the training of deep vanilla recurrent neural networks (RNNs) with up to 144 layers, and show that bipolar activation functions help learning i...

Full text
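The abstract above describes activations whose outputs are balanced around zero. A minimal NumPy sketch of one common way to build such a "bipolar" ReLU is given below: the ordinary ReLU on half of the units and its point-reflected version -ReLU(-x) on the other half. The function name bipolar_relu and the even/odd split are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def bipolar_relu(x):
    """Sketch of a bipolar ReLU (assumed formulation, not quoted from the
    paper): standard ReLU on even-indexed units and the flipped version
    -ReLU(-x) on odd-indexed units, so positive and negative outputs
    roughly balance and the layer's mean activation is pulled towards zero."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    out[..., 0::2] = np.maximum(x[..., 0::2], 0.0)    # standard ReLU
    out[..., 1::2] = -np.maximum(-x[..., 1::2], 0.0)  # flipped ReLU
    return out

# Example: mean of the bipolar outputs stays close to zero.
pre = np.random.randn(4, 8)
print(bipolar_relu(pre).mean())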



Journal

Journal title: Functional Analysis and Its Applications

Year: 1997

ISSN: 0016-2663, 1573-8485

DOI: 10.1007/bf02466011